Reflections on the All4People Summit 2025 – Advancing Ethical AI Governance

The All4People Summit 2025 arrived at a pivotal moment in the global conversation around AI governance. Coinciding with the release of the AI4People Playbook, the event brought together leaders from academia, industry, policy and civil society. What stood out across the opening panel, the Playbook launch, a working session on building a trustworthy AI-enabled future and discussions on global governance was not simply expertise, but a sense of shared responsibility and urgency.

Despite the range of backgrounds represented, the room felt aligned on one point: ethical AI governance is ultimately a question about the kind of society we want to build.

Governing AI as Governing Ourselves

Professor Virginia Dignum’s framing captured this perfectly. Two remarks deeply resonated with me:

‘Every AI system is a reflection of society.’
‘When we govern AI, we are governing ourselves.’

These ideas cut through both the hype and the fatalism surrounding AI. They reject the common narrative that imagines AI as an autonomous threat, a kind of ‘Skynet’ waiting to slip its leash. Instead, her comments located risk in a much more grounded space. They spoke to the incentives we create, the safeguards we choose and the values we embed into our institutions.

If AI reflects society, then the governance debate is not about controlling an alien intelligence. It is about holding a mirror up to the systems we have already built and asking whether they produce the outcomes we want.

Beyond the AI Act: The Case for Pragmatism

Dame Wendy Hall added an important complement to this perspective. While the EU AI Act continues to shape global regulatory thinking, she argued that we now need more pragmatic approaches that are adaptable across regions, sectors and values.

This is the unglamorous reality of governance: principles only matter if they can be implemented. Different countries have different regulatory cultures, different institutions and different tolerances for risk. Effective AI governance must therefore be both principled and practical. The summit made clear that flexibility does not mean dilution. Rather, it means recognising that governance is a living ecosystem, not a static rulebook.

Trust as a Socio-Technical Construct

One of the most rewarding sessions explored what it means to build a trustworthy AI-enabled future. A socio-technical perspective dominated the conversation, emphasising that trust is never just a property of technology.

Trust emerges from the interplay of:

  • The values a system is built upon

  • The processes through which it operates

  • The outcomes it produces in the real world.

A helpful distinction emerged between reliability (the system works as intended) and trustworthiness (the system aligns with societal expectations and norms). Trustworthiness was described as a ‘meta-value’ that rests on recognised societal values rather than technical specifications.

One framework surfaced repeatedly during discussions: a triangle of:

  1. Framing - Defining what trust means in context

  2. Measurement - Identifying indicators that genuinely reflect trustworthiness

  3. Interventions - Designing mechanisms that reinforce desired behaviours.

When these three elements align, trust can become operational rather than aspirational. When they do not, governance risks becoming symbolic rather than effective.

Closing Reflections

The Summit reinforced a truth that is easy to overlook amid rapid technological change: ethical AI governance is not about taming technology. It is about steering human institutions, aligning incentives with values and ensuring that innovation does not outpace our capacity for responsibility.

The energy and commitment at the event were inspiring. More importantly, they were grounded in realism. No one claimed that the path forward will be easy. What was claimed, however, and what I left believing, is that shaping a trustworthy AI future is both possible and necessary, provided we are willing to confront our own assumptions and build governance frameworks that reflect the societies we hope to become.
